This article provides a step-by-step guide to fine-tuning the Florence-2 model for object detection: loading the pre-trained model, training it on a custom dataset, and evaluating its performance.
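As a rough sketch of the first step (not the article's exact code), loading Florence-2 and running its object-detection prompt with Hugging Face Transformers typically looks like the following; the checkpoint ID, the `<OD>` task token, and the image path are illustrative assumptions:

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

checkpoint = "microsoft/Florence-2-base-ft"  # illustrative; the article may use a different variant

processor = AutoProcessor.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)
model = model.to("cuda").eval()

image = Image.open("example.jpg")   # any RGB image
task_prompt = "<OD>"                # Florence-2's object-detection task token

inputs = processor(text=task_prompt, images=image, return_tensors="pt").to("cuda")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
)
raw_output = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
# The processor converts the raw token sequence into boxes and labels.
detections = processor.post_process_generation(
    raw_output, task=task_prompt, image_size=(image.width, image.height)
)
print(detections)
```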
A lightweight codebase that enables memory-efficient and performant fine-tuning of Mistral's models. It is based on LoRA, a training paradigm in which most weights are frozen and only 1-2% additional weights, in the form of low-rank matrix perturbations, are trained.
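To make the LoRA idea concrete, here is a minimal PyTorch illustration (not the mistral-finetune implementation): a frozen pretrained linear layer augmented with a trainable low-rank update.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():            # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as a no-op
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.2%}")  # well under 2% for a single layer
```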
"The paper introduces a technique called LoReFT (Low-rank Linear Subspace ReFT). Similar to LoRA (Low Rank Adaptation), it uses low-rank approximations to intervene on hidden representations. It shows that linear subspaces contain rich semantics that can be manipulated to steer model behaviors."
Introduces proxy-tuning, a lightweight decoding-time algorithm that operates on top of black-box LMs to achieve the same end as direct tuning. The method tunes a smaller LM, then applies the difference between the predictions of the small tuned and untuned LMs to shift the original predictions of the larger untuned model in the direction of tuning, while retaining the benefits of larger-scale pretraining.
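A toy sketch of the core logit arithmetic at one decoding step (the tensor values are made up; in practice they are the next-token logits of the large base model, the small tuned "expert", and the small untuned "anti-expert"):

```python
import torch
import torch.nn.functional as F

def proxy_tuned_logits(base_logits, expert_logits, antiexpert_logits):
    """Shift the large untuned model's next-token logits by the tuned-minus-untuned
    difference of the small model pair."""
    return base_logits + (expert_logits - antiexpert_logits)

# Toy 5-token vocabulary; real use repeats this at every decoding step.
base = torch.tensor([2.0, 1.0, 0.5, 0.0, -1.0])        # large untuned model
expert = torch.tensor([0.5, 2.5, 0.0, 0.0, -1.0])       # small tuned model
antiexpert = torch.tensor([1.5, 0.5, 0.0, 0.0, -1.0])   # small untuned model

probs = F.softmax(proxy_tuned_logits(base, expert, antiexpert), dim=-1)
next_token = torch.argmax(probs)   # the small pair's tuning signal steers the big model
```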
- GitHub repository for a tutorial series called "0 to LitGPT."
- Provides an overview of how to get started with LitGPT, Lightning AI's open-source library for pretraining, fine-tuning, and deploying large language models (a minimal usage sketch follows this list).
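A minimal, assumed usage sketch of the LitGPT Python API; the model name and exact method signatures should be checked against the repository's tutorials.

```python
# Hypothetical quick-start; "microsoft/phi-2" is just an example of a supported checkpoint.
from litgpt import LLM

llm = LLM.load("microsoft/phi-2")   # downloads and loads a supported checkpoint
print(llm.generate("What is LoRA?", max_new_tokens=64))
```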
- Discusses the use of consumer graphics cards for fine-tuning large language models (LLMs)
- Compares consumer graphics cards, such as NVIDIA GeForce RTX Series GPUs, to data center and cloud computing GPUs
- Highlights the differences in GPU memory and price between consumer and data center GPUs
- Shares the author's experience using a GeForce RTX 3090 card with 24GB of GPU memory for fine-tuning LLMs (a rough memory estimate follows below)
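As a back-of-the-envelope illustration of why 24GB is workable (rule-of-thumb numbers of my own, not figures from the post), the memory arithmetic looks roughly like this:

```python
# Rough memory estimate for fine-tuning a 7B-parameter model with AdamW.
params = 7e9
bytes_per_param = 2                       # bf16/fp16 weights

weights = params * bytes_per_param
grads = params * bytes_per_param          # one gradient per trained weight
adam_states = params * 4 * 2              # fp32 first and second moments

full_ft_gb = (weights + grads + adam_states) / 1e9
lora_trained = 0.02 * params              # only ~1-2% of weights trained with LoRA
lora_gb = (weights + lora_trained * (bytes_per_param + 8)) / 1e9

print(f"full fine-tuning ~{full_ft_gb:.0f} GB (before activations)")  # ~84 GB
print(f"LoRA fine-tuning ~{lora_gb:.0f} GB (before activations)")     # ~15 GB, fits in 24 GB
```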